Towards Dog Bark Decoding: Leveraging Human Speech Processing for Automated Bark Classification

Abzaliev, Artem, Espinosa, Humberto Pérez, Mihalcea, Rada

arXiv.org Artificial Intelligence

Similar to humans, animals make extensive use of verbal and non-verbal forms of communication, including a large range of audio signals. In this paper, we address dog vocalizations and explore the use of self-supervised speech representation models pre-trained on human speech to address dog bark classification tasks that find parallels in human-centered tasks in speech recognition. We specifically address four tasks: dog recognition, breed identification, gender classification, and context grounding. We show that using speech embedding representations significantly improves over simpler classification baselines. Further, we also find that models pre-trained on large human speech acoustics can provide additional performance boosts on several tasks.
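The pipeline the abstract describes, extracting a fixed-size embedding per bark clip and training a classifier on top, can be sketched with stand-in data. Everything below is illustrative: the random vectors stand in for embeddings that would really come from a self-supervised speech encoder, the breed names are hypothetical, and the nearest-centroid classifier is a minimal substitute for whatever classifier the authors actually trained.

```python
# Sketch: classify bark clips from fixed-size embeddings (toy data).
# In practice the embeddings would come from a speech model pre-trained
# on human audio; here noisy random vectors around two centers stand in.
import random

random.seed(0)
DIM = 8

def fake_embedding(center, spread=0.1):
    # A stand-in for "run the clip through the pre-trained encoder".
    return [c + random.uniform(-spread, spread) for c in center]

# Two hypothetical breed clusters in embedding space.
centers = {"chihuahua": [1.0] * DIM, "schnauzer": [-1.0] * DIM}
train = [(fake_embedding(c), breed)
         for breed, c in centers.items() for _ in range(5)]

def centroid(vectors):
    return [sum(xs) / len(xs) for xs in zip(*vectors)]

centroids = {breed: centroid([v for v, b in train if b == breed])
             for breed in centers}

def classify(embedding):
    # Nearest-centroid rule: pick the breed whose centroid is closest.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda breed: dist(embedding, centroids[breed]))

print(classify(fake_embedding(centers["chihuahua"])))  # prints "chihuahua"
```

The key design point the paper exploits is that a good embedding space makes even simple downstream classifiers effective, which is why pre-training on large human speech corpora helps.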


Context Generation Improves Open Domain Question Answering

Su, Dan, Patwary, Mostofa, Prabhumoye, Shrimai, Xu, Peng, Prenger, Ryan, Shoeybi, Mohammad, Fung, Pascale, Anandkumar, Anima, Catanzaro, Bryan

arXiv.org Artificial Intelligence

Closed-book question answering (QA) requires a model to directly answer an open-domain question without access to any external knowledge. Prior work on closed-book QA either directly finetunes or prompts a pretrained language model (LM) to leverage the stored knowledge. However, they do not fully exploit the parameterized knowledge. To address this issue, we propose a two-stage, closed-book QA framework which employs a coarse-to-fine approach to extract relevant knowledge and answer a question. Our approach first generates a related context for a given question by prompting a pretrained LM. We then prompt the same LM for answer prediction using the generated context and the question. Additionally, to eliminate failure caused by context uncertainty, we marginalize over generated contexts. Experimental results on three QA benchmarks show that our method significantly outperforms previous closed-book QA methods (e.g. exact matching 68.6% vs. 55.3%), and is on par with open-book methods that exploit external knowledge sources (e.g. 68.6% vs. 68.0%). Our method is able to better exploit the stored knowledge in pretrained LMs without adding extra learnable parameters or needing finetuning, and paves the way for hybrid models that integrate pretrained LMs with external knowledge.
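The two-stage pipeline above (generate a context with the LM, then answer conditioned on it, marginalizing over sampled contexts) can be sketched with a stubbed model. The `toy_lm_*` functions and the majority-vote marginalization below are illustrative stand-ins, not the paper's actual prompts or scoring; the authors marginalize over contexts in the LM's probability space.

```python
# Sketch of a two-stage closed-book QA pipeline with a toy LM stub.
from collections import Counter

def toy_lm_generate_context(question, n_samples=3):
    # Stage 1: sample several candidate contexts for the question (stubbed;
    # a real system would prompt a pretrained LM here).
    return [f"generated context {i} about: {question}" for i in range(n_samples)]

def toy_lm_answer(question, context):
    # Stage 2: answer conditioned on question + generated context (stubbed).
    return "Paris" if "capital of France" in question else "unknown"

def answer(question):
    contexts = toy_lm_generate_context(question)
    # Marginalize over contexts; here a simple majority vote over the
    # per-context answers stands in for probability-space marginalization.
    votes = Counter(toy_lm_answer(question, c) for c in contexts)
    return votes.most_common(1)[0][0]

print(answer("What is the capital of France?"))  # prints "Paris"
```

Marginalizing over several generated contexts is what guards against a single unlucky context steering the answer wrong, which is the "context uncertainty" failure the abstract mentions.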


Generative Long-form Question Answering: Relevance, Faithfulness and Succinctness

Su, Dan

arXiv.org Artificial Intelligence

In this thesis, we investigated the relevance, faithfulness, and succinctness aspects of Long-Form Question Answering (LFQA). LFQA aims to generate an in-depth, paragraph-length answer to a given question, helping to bridge the gap between real scenarios and existing open-domain QA models, which can only extract short-span answers. LFQA is challenging and under-explored: little work has been done to build an effective LFQA system, and it is even harder to generate a good-quality long-form answer that is relevant to the query and faithful to facts, since the retrieved documents contain a considerable amount of redundant, complementary, or contradictory information. Moreover, no prior work has investigated generating succinct answers. We are among the first to research the LFQA task, pioneering the research direction of improving answer quality in terms of 1) query relevance, 2) answer faithfulness, and 3) answer succinctness.


ERICA: Improving Entity and Relation Understanding for Pre-trained Language Models via Contrastive Learning

Qin, Yujia, Lin, Yankai, Takanobu, Ryuichi, Liu, Zhiyuan, Li, Peng, Ji, Heng, Huang, Minlie, Sun, Maosong, Zhou, Jie

arXiv.org Artificial Intelligence

Pre-trained Language Models (PLMs) have shown strong performance on various downstream Natural Language Processing (NLP) tasks. However, PLMs still cannot fully capture the factual knowledge in text, which is crucial for understanding a whole document, especially in document-level language understanding tasks. To address this issue, we propose ERICA, a novel contrastive learning framework applied in the pre-training phase to obtain a deeper understanding of the entities and their relations in text. Specifically, (1) to better understand entities, we propose an entity discrimination task that distinguishes which tail entity can be inferred from a given head entity and relation; (2) to better understand relations, we employ a relation discrimination task that distinguishes whether two entity pairs are close in relational semantics. Experimental results demonstrate that our proposed ERICA framework achieves consistent improvements on several document-level language understanding tasks, including relation extraction and reading comprehension, especially in low-resource settings. Meanwhile, ERICA achieves comparable or better performance on sentence-level tasks. We will release the datasets, source code, and pre-trained language models for further research.
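The discrimination tasks above are contrastive objectives: pull the representation of the correct entity (or entity pair) toward its anchor and push distractors away. The snippet below is a toy InfoNCE-style loss in that spirit; the vectors, dimensions, and temperature are illustrative, not ERICA's actual architecture or training setup.

```python
# Toy contrastive discrimination loss in the spirit of ERICA's entity
# discrimination: score the anchor against one positive and several
# negatives, and penalize low relative score for the positive.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def info_nce(anchor, positive, negatives, temperature=1.0):
    # -log( exp(s_pos/t) / sum over positive and negatives of exp(s/t) )
    scores = [dot(anchor, positive)] + [dot(anchor, n) for n in negatives]
    scores = [s / temperature for s in scores]
    log_denominator = math.log(sum(math.exp(s) for s in scores))
    return -(scores[0] - log_denominator)

anchor = [1.0, 0.0]                      # e.g. (head entity, relation) encoding
positive = [0.9, 0.1]                    # the correct tail entity
negatives = [[-1.0, 0.0], [0.0, -1.0]]   # distractor entities

# Treating the true tail as positive yields a lower loss than treating
# a distractor as positive, which is what drives learning.
loss_good = info_nce(anchor, positive, negatives)
loss_bad = info_nce(anchor, negatives[0], [positive, negatives[1]])
print(loss_good < loss_bad)  # prints True
```

The relation discrimination task has the same shape, with entity-pair representations in place of single entities.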


Low-pitched, rumbling rocks could help predict when earthquakes strike, research says

The Japan Times

TEPIC, MEXICO – Rocks under increasing pressure before earthquakes strike send out low-pitched rumbling sounds that the human ear cannot detect but could be used to predict when a tremor will strike, scientists said Monday. Researchers recreated powerful earthquake forces in a laboratory and used high-tech algorithms to pick out the acoustic clues amid all the other noise of a pending quake, according to findings published in Geophysical Research Letters, a journal published by the American Geophysical Union. The sounds are emitted typically a week before an earthquake occurs, so deciphering them would allow scientists to pinpoint the timing of a tremor, the research paper said. Scientists currently can calculate the probability of an earthquake in a particular area but not when it will happen, according to the U.S. Geological Survey. "People have said you can't predict earthquakes. We're now saying we believe for the first time we can predict an earthquake in a laboratory," said Colin Humphreys, professor of materials science at Cambridge University and one of the paper's authors.